Post-training discriminative pruning for RBMs
Authors
Abstract
Similar papers
Discriminative Pruning for Discriminative ITG Alignment
While Inversion Transduction Grammar (ITG) has regained more and more attention in recent years, it still suffers from the major obstacle of speed. We propose a discriminative ITG pruning framework using Minimum Error Rate Training and various features from previous work on ITG alignment. Experimental results show that it is superior to all existing heuristics in ITG pruning. On top of the prunin...
Generative versus discriminative training of RBMs for classification of fMRI images
Neuroimaging datasets often have a very large number of voxels and a very small number of training cases, which means that overfitting of models for this data can become a very serious problem. Working with a set of fMRI images from a study on stroke recovery, we consider a classification task for which logistic regression performs poorly, even when L1- or L2-regularized. We show that much better ...
Discriminative Training of 150 Million Translation Parameters and Its Application to Pruning
Until recently, the application of discriminative training to log-linear statistical machine translation has been limited to tuning the weights of a limited number of features or training features with a limited number of parameters. In this paper, we propose to scale up the discriminative training of (He and Deng, 2012) to train features with 150 million parameters, which is one order of mag...
Discriminative Pruning of Language Models for Chinese Word Segmentation
This paper presents a discriminative pruning method for n-gram language models in Chinese word segmentation. To reduce the size of the language model used in a Chinese word segmentation system, the importance of each bigram is computed in terms of a discriminative pruning criterion that is related to the performance loss caused by pruning that bigram. Then we propose a step-by-step growing algo...
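The recipe in that abstract (score each bigram by the cost of removing it, then keep only the most important ones) can be sketched as follows. The toy corpus, the count-based importance score, and the budget of three bigrams are all illustrative assumptions; the paper's actual criterion is discriminative, tied to segmentation-performance loss.

```python
from collections import Counter
from math import log

# Hypothetical toy corpus standing in for a real training set.
corpus = "the cat sat on the mat the cat ran".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())

def importance(bg, count):
    """Toy importance: weighted log-likelihood gain of keeping the bigram
    over backing off to the unigram model (a stand-in for the paper's
    discriminative performance-loss criterion)."""
    w1, w2 = bg
    p_bigram = count / unigrams[w1]      # P(w2 | w1) under the bigram model
    p_unigram = unigrams[w2] / total     # P(w2) under the backoff unigram model
    return count * (log(p_bigram) - log(p_unigram))

# Prune: keep only the bigrams whose removal would cost the most.
budget = 3
kept = sorted(bigrams.items(), key=lambda kv: importance(*kv), reverse=True)[:budget]
pruned_model = dict(kept)
print(pruned_model)
```

A real system would iterate this with the step-by-step growing idea the abstract mentions (re-adding bigrams while the downstream metric keeps improving), rather than pruning in a single pass.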
Piecewise HMM discriminative training
This paper addresses the problem of training HMMs on long files of uninterrupted speech with limited, constant memory requirements. Classical training algorithms usually require training utterances of limited duration, due to the memory needed to store the generated trellis. Our solution allows exploiting databases that are transcribed, but not partitioned into sentences, using a slidi...
Journal
Journal title: Soft Computing
Year: 2017
ISSN: 1432-7643,1433-7479
DOI: 10.1007/s00500-017-2784-3